
    Look No Further: Adapting the Localization Sensory Window to the Temporal Characteristics of the Environment

    Full text link
    Many localization algorithms use a spatiotemporal window of sensory information to recognize spatial locations, and the length of this window is often a sensitive parameter that must be tuned to the specifics of the application. This letter presents a general method for environment-driven variation of the length of the spatiotemporal window based on searching for the most significant localization hypothesis, so as to use as much context as is appropriate but no more. We evaluate this approach on benchmark datasets using visual and Wi-Fi sensor modalities and a variety of sensory comparison front-ends under in-order and out-of-order traversals of the environment. Our results show that the system greatly reduces the maximum distance traveled without localization compared to a fixed-length approach while achieving competitive localization accuracy, and our proposed method achieves this performance without deployment-time tuning. Comment: Pre-print of article appearing in 2017 IEEE Robotics and Automation Letters. v2: incorporated reviewer feedback.
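
    The core idea lends itself to a short illustration. Below is a minimal sketch (not the paper's implementation; the function name, array layout, and ratio-based significance test are all assumptions) of growing the temporal window until the best place hypothesis clearly dominates its nearest competitor:

        import numpy as np

        def localize_adaptive(similarity, max_window, significance_ratio=2.0):
            """Grow the temporal window until one place hypothesis is
            significantly stronger than its closest competitor.

            similarity: (T, P) array of per-frame match scores against P
            candidate places, most recent frame last.
            Returns (place_index, window_length), or (None, max_window)
            if no hypothesis ever dominates.
            """
            for w in range(1, max_window + 1):
                scores = similarity[-w:].sum(axis=0)  # evidence over last w frames
                top_two = np.sort(scores)[-2:]        # (second-best, best)
                if top_two[0] > 0 and top_two[1] / top_two[0] >= significance_ratio:
                    return int(np.argmax(scores)), w  # significant: stop growing
            return None, max_window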

    Beyond Browsing and Reading: The Open Work of Digital Scholarly Editions

    Get PDF
    INKE’s Modelling and Prototyping group is currently motivated by the following research questions: How do we model and enable context within the electronic scholarly edition? And how do we engage knowledge-building communities and capture process, dialogue and connections in and around the electronic scholarly edition? NewRadial is a prototype scholarly edition environment developed to address such queries. It argues for the unification of primary texts, secondary scholarship and related knowledge communities, and re-presents the digital scholarly edition as a social edition: an open work and shared space where users collaboratively explore, sort, group, annotate and contribute to the creation of secondary scholarship.

    Learning Deployable Navigation Policies at Kilometer Scale from a Single Traversal

    Full text link
    Model-free reinforcement learning has recently been shown to be effective at learning navigation policies from complex image input. However, these algorithms tend to require large amounts of interaction with the environment, which can be prohibitively costly to obtain on robots in the real world. We present an approach for efficiently learning goal-directed navigation policies on a mobile robot, from only a single coverage traversal of recorded data. The navigation agent learns an effective policy over a diverse action space in a large heterogeneous environment consisting of more than 2km of travel, through buildings and outdoor regions that collectively exhibit large variations in visual appearance, self-similarity, and connectivity. We compare pretrained visual encoders that enable precomputation of visual embeddings to achieve a throughput of tens of thousands of transitions per second at training time on a commodity desktop computer, allowing agents to learn from millions of trajectories of experience in a matter of hours. We propose multiple forms of computationally efficient stochastic augmentation to enable the learned policy to generalise beyond these precomputed embeddings, and demonstrate successful deployment of the learned policy on the real robot without fine tuning, despite environmental appearance differences at test time. The dataset and code required to reproduce these results and apply the technique to other datasets and robots are made publicly available at rl-navigation.github.io/deployable.
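
    The throughput claim rests on caching embeddings from a frozen encoder and perturbing them stochastically at training time. A minimal sketch of that pattern follows, assuming additive Gaussian noise and random feature dropout as the augmentations; the shapes and names are hypothetical, not taken from the released code:

        import numpy as np

        rng = np.random.default_rng(0)

        def augment(embeddings, noise_std=0.1, drop_prob=0.1):
            """Perturb cached visual embeddings with Gaussian noise and
            random feature dropout so the learned policy generalises
            beyond the fixed precomputed vectors."""
            noisy = embeddings + rng.normal(0.0, noise_std, embeddings.shape)
            mask = rng.random(embeddings.shape) > drop_prob
            return noisy * mask

        # Stand-in for an embedding cache precomputed once by a frozen
        # encoder; training samples and perturbs rows instead of
        # re-running the encoder, which is what makes tens of thousands
        # of transitions per second feasible on a desktop machine.
        cache = rng.normal(size=(10_000, 256)).astype(np.float32)
        batch = augment(cache[rng.integers(0, len(cache), size=256)])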

    Evaluation and Long-Term Outcomes of Cardiac Toxicity in Paediatric Cancer Patients

    Get PDF
    Paediatric cancer survival rates have increased dramatically in the last 20 years. With decreased mortality comes increased long-term morbidity, and cardiovascular disease is the leading cause of secondary morbidity and mortality among childhood cancer survivors. The most common chemotherapeutic agents in treatment regimens are implicated in chemotherapy-induced cardiomyopathy. The clinical presentation is rarely uniform and may manifest in symptoms other than chest pain, shortness of breath or decreased exercise tolerance. In addition to symptomatic patients, asymptomatic patients are especially important to screen, as the effects of cardiac toxicity are reversible if caught early. New techniques more sensitive than traditional 2D echocardiography ejection fraction may lead to earlier detection of cardiac dysfunction. Treatment methods have changed little in the recent past, with the exception of the miniaturization of support devices allowing for cardiac recovery or bridge to cardiac transplant.

    Evaluating task-agnostic exploration for fixed-batch learning of arbitrary future tasks

    Full text link
    Deep reinforcement learning has been shown to solve challenging tasks where large amounts of training experience are available, usually obtained online while learning the task. Robotics is a significant potential application domain for many of these algorithms, but generating robot experience in the real world is expensive, especially when each task requires a lengthy online training procedure. Off-policy algorithms can in principle learn arbitrary tasks from a sufficiently diverse fixed dataset. In this work, we evaluate popular exploration methods by generating robotics datasets for the purpose of learning to solve tasks completely offline, without any further interaction in the real world. We present results on three popular continuous control tasks in simulation, as well as continuous control of a high-dimensional real robot arm. Code documenting all algorithms, experiments, and hyper-parameters is available at https://github.com/qutrobotlearning/batchlearning.
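
    The fixed-batch setting is easiest to see in a tabular stand-in. The sketch below uses fitted Q-iteration over a small discrete dataset rather than the continuous-control deep RL of the paper, purely to illustrate learning with no further environment interaction; every name and value here is an assumption for illustration:

        import numpy as np

        def fitted_q_iteration(dataset, n_actions, gamma=0.99, iters=50):
            """Tabular fitted Q-iteration over a fixed batch of
            (state, action, reward, next_state, done) tuples; the
            environment is never queried during learning."""
            n_states = 1 + max(max(s, s2) for s, _, _, s2, _ in dataset)
            q = np.zeros((n_states, n_actions))
            for _ in range(iters):
                q_new = q.copy()
                for s, a, r, s2, done in dataset:
                    # Regression target built from the batch alone.
                    q_new[s, a] = r if done else r + gamma * q[s2].max()
                q = q_new
            return q

        # Two transitions collected by some exploration policy,
        # then learned from entirely offline.
        batch = [(0, 1, 0.0, 1, False), (1, 0, 1.0, 1, True)]
        print(fitted_q_iteration(batch, n_actions=2))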

    Vision-and-Language Navigation: Interpreting visually-grounded navigation instructions in real environments

    Full text link
    A robot that can carry out a natural-language instruction has been a dream since before the Jetsons cartoon series imagined a life of leisure mediated by a fleet of attentive robot helpers. It is a dream that remains stubbornly distant. However, recent advances in vision and language methods have made incredible progress in closely related areas. This is significant because a robot interpreting a natural-language navigation instruction on the basis of what it sees is carrying out a vision and language process that is similar to Visual Question Answering. Both tasks can be interpreted as visually grounded sequence-to-sequence translation problems, and many of the same methods are applicable. To enable and encourage the application of vision and language methods to the problem of interpreting visually-grounded navigation instructions, we present the Matterport3D Simulator -- a large-scale reinforcement learning environment based on real imagery. Using this simulator, which can in future support a range of embodied vision and language tasks, we provide the first benchmark dataset for visually-grounded natural language navigation in real buildings -- the Room-to-Room (R2R) dataset. Comment: CVPR 2018 Spotlight presentation.
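
    The sequence-to-sequence framing the abstract describes can be sketched in a few lines. The following is a hypothetical skeleton (PyTorch; the dimensions, names, and single-vector instruction context are assumptions, not the R2R baseline agent): an LSTM encodes the instruction, and a decoder predicts one action per step from the current visual feature plus the instruction context.

        import torch
        import torch.nn as nn

        class Seq2SeqFollower(nn.Module):
            """Minimal seq2seq framing: encode the instruction, then
            predict an action at each step conditioned on the current
            visual feature and the instruction context."""

            def __init__(self, vocab, n_actions, d=128, d_img=2048):
                super().__init__()
                self.embed = nn.Embedding(vocab, d)
                self.encoder = nn.LSTM(d, d, batch_first=True)
                self.decoder = nn.LSTMCell(d_img + d, d)
                self.head = nn.Linear(d, n_actions)

            def forward(self, instr, img_feats):
                # instr: (B, L) token ids; img_feats: (B, T, d_img).
                _, (h, c) = self.encoder(self.embed(instr))
                h, c = h[0], c[0]
                ctx = h  # instruction context reused at every step
                logits = []
                for t in range(img_feats.size(1)):
                    step_in = torch.cat([img_feats[:, t], ctx], dim=-1)
                    h, c = self.decoder(step_in, (h, c))
                    logits.append(self.head(h))
                return torch.stack(logits, dim=1)  # (B, T, n_actions)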

    A new approach to generating research-quality phenology data: The USA National Phenology Monitoring System

    Get PDF
    The USA National Phenology Network (www.usanpn.org) has recently initiated a national effort to encourage people at different levels of expertise—from backyard naturalists to professional scientists—to observe phenology and contribute to a national database that will be used to greatly improve our understanding of spatio-temporal variation in phenology and associated phenological responses to climate change. Many phenological observation protocols identify specific single dates at which individual phenological events are observed, but the scientific usefulness of long-term phenological observations can be improved with a more carefully structured protocol. At the USA-NPN we have developed a new approach that directs observers to record each day that they observe an individual plant, and to assess and report the state of specific life stages (or phenophases) as occurring or not occurring on that plant for each observation date. Observations of animal phenophases are similarly recorded, although for a species as a whole rather than for a specific individual. Evaluation is phrased in terms of simple, easy-to-understand questions (e.g. “Do you see open flowers?”), which makes it appropriate for a broad audience. From this method, a rich dataset of phenological metrics can be extracted, including the duration of a phenophase (e.g. open flowers), the beginning and end points of a phenophase (e.g. traditional phenological events such as first flower and end of flowering), multiple distinct occurrences of phenophases within a single growing season (e.g. multiple flowering events, common in drought-prone regions), as well as quantification of sampling frequency and observational uncertainties. The system also includes a mechanism for translation of phenophase start and end points into standard traditional phenological events, to facilitate comparison of contemporary data collected with this new “phenophase status” monitoring approach to historical datasets collected with the “phenological event” monitoring approach. These features greatly enhance the utility of the resulting data for statistical analyses addressing questions such as how phenological events vary in time and space, and in response to global change.
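
    Because every observation is a dated yes/no phenophase status report, the derived metrics fall out of simple run-length logic. Below is a minimal sketch (a hypothetical function, not USA-NPN code) recovering onset, end, duration, and multiple distinct occurrences from such records:

        from datetime import date
        from itertools import groupby

        def phenophase_episodes(records):
            """From dated yes/no phenophase status reports, recover each
            distinct occurrence as (onset, end, duration_days).

            records: list of (date, bool) pairs, e.g. answers to
            "Do you see open flowers?", assumed sorted by date.
            """
            episodes = []
            for seen, group in groupby(records, key=lambda r: r[1]):
                if seen:
                    days = [d for d, _ in group]
                    duration = (days[-1] - days[0]).days + 1
                    episodes.append((days[0], days[-1], duration))
            return episodes

        # Two flowering events in one season, as in drought-prone regions.
        obs = [(date(2020, 4, 1), False), (date(2020, 4, 8), True),
               (date(2020, 4, 15), True), (date(2020, 4, 22), False),
               (date(2020, 5, 6), True)]
        print(phenophase_episodes(obs))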